Sieve Estimators: Consistency and Rates of Convergence
Abstract
The excess risk of a discrimination rule ĥ_n over a hypothesis class H decomposes into two terms. The first term is the estimation error and measures the performance of the discrimination rule with respect to the best hypothesis in H. In previous lectures, we studied performance guarantees for this quantity when ĥ_n is the empirical risk minimizer (ERM). Now we also consider the second term, the approximation error, which captures how well a class of hypotheses {H_k} approximates the Bayes decision boundary. Consider, for example, the histogram classifiers in Figures 1 and 2: as the grid becomes finer, the class of hypotheses approximates the Bayes decision boundary with increasing accuracy. Examining the approximation error leads to the design of sieve estimators, which perform ERM over H_k, where k = k(n) grows with n at an appropriate rate so that both the approximation error and the estimation error converge to 0. Note that while the estimation error is random (because it depends on the sample), the approximation error is not.
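Written out, the decomposition referred to above takes the following form; the notation (R for the risk of a classifier, R* for the Bayes risk) is standard but assumed here rather than quoted from the excerpt:

```latex
R(\hat{h}_n) - R^{*}
  = \underbrace{\Big( R(\hat{h}_n) - \inf_{h \in \mathcal{H}_k} R(h) \Big)}_{\text{estimation error (random)}}
  + \underbrace{\Big( \inf_{h \in \mathcal{H}_k} R(h) - R^{*} \Big)}_{\text{approximation error (deterministic)}}
```

A sieve estimator drives both terms to zero by letting k = k(n) → ∞ slowly enough that the estimation error over H_{k(n)} still vanishes.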
Similar resources
An invitation to quantum tomography
We describe quantum tomography as an inverse statistical problem. We establish consistency in different norms for Pattern Function Projection Estimators as well as for Sieve Maximum Likelihood Estimators of the density matrix of the quantum state and its Wigner function. The density matrix and the Wigner function are two different ways of representing a quantum state. Results are derived ...
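As a hedged sketch of the second estimator class mentioned above, a sieve maximum likelihood estimator maximizes the empirical log-likelihood over a growing family of candidate states; the symbols below (R_{k(n)} for the sieve, p_ρ for the density of measurement outcomes under state ρ) are illustrative assumptions, not notation from the paper:

```latex
\hat{\rho}_n = \arg\max_{\rho \in \mathcal{R}_{k(n)}} \frac{1}{n} \sum_{i=1}^{n} \log p_{\rho}(Y_i)
```

where Y_1, …, Y_n are the observed measurement outcomes and the sieve R_{k(n)} grows with the sample size.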
Note on the Consistency of Sieve Estimators
In this note, the consistency of sieve estimators is derived without requiring compactness of the entire parameter space, while allowing the expected objective function to take infinite values.
Estimation of Nonparametric Conditional Moment Models with Possibly Nonsmooth Generalized Residuals
This paper studies nonparametric estimation of conditional moment models in which the generalized residual functions can be nonsmooth in the unknown functions of endogenous variables. This is a nonparametric nonlinear instrumental variables (IV) problem. We propose a class of penalized sieve minimum distance (PSMD) estimators, which are minimizers of a penalized empirical minimum distance criterion ...
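As a rough sketch of the kind of criterion involved (the symbols m̂ for the estimated conditional moment, Σ̂ for a weighting matrix, and Pen for the penalty are illustrative assumptions, not notation from the abstract), a PSMD estimator minimizes a penalized quadratic form in estimated conditional moments over a sieve space:

```latex
\hat{\alpha}_n = \arg\min_{\alpha \in \mathcal{A}_{k(n)}}
  \left[ \frac{1}{n} \sum_{i=1}^{n} \hat{m}(X_i, \alpha)^{\top} \hat{\Sigma}(X_i)^{-1} \hat{m}(X_i, \alpha)
  + \lambda_n \, \mathrm{Pen}(\alpha) \right]
```

where m̂(X, α) is a nonparametric estimate of the conditional mean of the generalized residual given the conditioning variables X, A_{k(n)} is a sieve space for the unknown functions, and λ_n is a penalty weight.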
Sieve Bootstrap for Time Series
We study a bootstrap method which is based on the method of sieves. A linear process is approximated by a sequence of autoregressive processes of order p = p_n, where p_n → ∞, p_n = o(n) as the sample size n → ∞. For given data, we then estimate such an AR(p_n) model and generate a bootstrap sample by resampling from the residuals. This sieve bootstrap enjoys a nice nonparametric property. We ...
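A minimal sketch of the resampling scheme just described, assuming numpy and statsmodels are available and that x is a one-dimensional numpy array; the order rule for p_n and the function name are illustrative choices, not taken from the paper:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def sieve_bootstrap(x, n_boot=500, seed=0):
    """Sieve (AR) bootstrap: fit an AR(p_n) model, resample its centered
    residuals, and regenerate series by recursing the fitted autoregression."""
    rng = np.random.default_rng(seed)
    n = len(x)
    p = max(1, int(10 * np.log10(n)))      # illustrative order: p_n -> inf, p_n = o(n)
    fit = AutoReg(x, lags=p).fit()
    resid = fit.resid - fit.resid.mean()   # center the residuals before resampling
    const, phi = fit.params[0], fit.params[1:]
    samples = np.empty((n_boot, n))
    for b in range(n_boot):
        eps = rng.choice(resid, size=n, replace=True)
        xb = list(x[:p])                   # warm-start with observed values
        for t in range(p, n):
            # x_t = c + phi_1 x_{t-1} + ... + phi_p x_{t-p} + resampled innovation
            xb.append(const + phi @ np.asarray(xb[-p:][::-1]) + eps[t])
        samples[b] = xb
    return samples
```

Each bootstrap replicate in `samples` can then be fed to the statistic of interest to approximate its sampling distribution.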
Oracle Inequalities and Adaptive Rates
We have previously seen how sieve estimators give rise to rates of convergence to the Bayes risk by performing empirical risk minimization over H_{k(n)}, where (H_k)_{k ≥ 1} is an increasing sequence of sets of classifiers and k(n) → ∞. However, the rate of convergence depends on k(n). Usually k(n) is chosen to minimize the worst-case rate over all distributions of interest. However, it would be ...
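The adaptivity question being raised here is usually formalized through an oracle inequality; schematically (the penalty term pen(k, n) below is an assumed placeholder for a complexity bound, not a result quoted from the excerpt):

```latex
\mathbb{E}\, R(\hat{h}_n) - R^{*}
  \le \min_{k \ge 1} \left[ \Big( \inf_{h \in \mathcal{H}_k} R(h) - R^{*} \Big) + \mathrm{pen}(k, n) \right]
```

An estimator satisfying such a bound performs nearly as well as if the best index k had been chosen with knowledge of the underlying distribution, rather than fixing k(n) in advance for the worst case.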